
    The six-point remainder function to all loop orders in the multi-Regge limit

    We present an all-orders formula for the six-point amplitude of planar maximally supersymmetric N=4 Yang-Mills theory in the leading-logarithmic approximation of multi-Regge kinematics. In the MHV helicity configuration, our results agree with an integral formula of Lipatov and Prygarin through at least 14 loops. A differential equation linking the MHV and NMHV helicity configurations has a natural action in the space of functions relevant to this problem---the single-valued harmonic polylogarithms introduced by Brown. These functions depend on a single complex variable and its conjugate, w and w*, which are quadratically related to the original kinematic variables. We investigate the all-orders formula in the near-collinear limit, which is approached as |w|->0. Up to power-suppressed terms, the resulting expansion may be organized by powers of log|w|. The leading term of this expansion agrees with the all-orders double-leading-logarithmic approximation of Bartels, Lipatov, and Prygarin. The explicit form for the sub-leading powers of log|w| is given in terms of modified Bessel functions. Comment: 25 pages, 1 figure
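
    As orientation for the closing statement (a standard identity, not the paper's actual result), the mechanism by which powers of logarithms at each loop order resum into a modified Bessel function is the series identity

        \sum_{k=0}^{\infty} \frac{z^k}{k!\,(k+1)!} = \frac{I_1(2\sqrt{z})}{\sqrt{z}},
        \qquad
        I_1(x) = \sum_{k=0}^{\infty} \frac{1}{k!\,(k+1)!} \left(\frac{x}{2}\right)^{2k+1},

    where, in a double-leading-logarithmic setting, z is schematically a product of the coupling and large logarithms such as log|w|; the precise identification used in the paper's formula is not reproduced here.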

    Impression management and formation on Facebook: A lens model approach

    To extend research on online impression formation and warranting theory, the present investigation reports a Brunswik lens model analysis of Facebook profiles. Facebook users’ (N = 100) personality (i.e., extraversion, agreeableness, conscientiousness, neuroticism, openness) was self-reported. Facebook users’ profiles were then content analyzed for the presence and rate of 53 cues. Observers (N = 35), who were strangers to the profile owners, estimated profile owner personality. Results indicate that observers could accurately estimate profile owners’ extraversion, agreeableness, and conscientiousness. For all personality traits except neuroticism, unique profile cues were diagnostic warrants of personality (i.e., indicative of profile owner personality and used by strangers to estimate it). The results are discussed in relation to warranting theory, impression formation, and lens model research.
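
    As a minimal sketch of what a lens model analysis computes (illustrative Python on synthetic data; variable names are hypothetical and do not come from the study): cue validity is the correlation between each coded profile cue and the owner's self-reported trait, cue utilization is the correlation between the cue and observers' judgments, and accuracy is the correlation between self-reports and observer judgments.

        # Minimal sketch of a Brunswik lens model analysis (illustrative only;
        # the data are synthetic, not the study's materials).
        import numpy as np

        rng = np.random.default_rng(0)
        n_profiles, n_cues = 100, 53                      # matches the Ns in the abstract

        self_report = rng.normal(size=n_profiles)          # e.g. self-reported extraversion
        cues = rng.normal(size=(n_profiles, n_cues))       # content-coded profile cues
        observer_judgment = rng.normal(size=n_profiles)    # strangers' averaged ratings

        def corr(x, y):
            """Pearson correlation between two vectors."""
            return np.corrcoef(x, y)[0, 1]

        # Cue validity: does a cue actually track the owner's personality?
        validity = np.array([corr(cues[:, j], self_report) for j in range(n_cues)])
        # Cue utilization: do observers rely on the cue when judging?
        utilization = np.array([corr(cues[:, j], observer_judgment) for j in range(n_cues)])
        # Accuracy ("achievement"): do observer judgments track self-reports?
        accuracy = corr(self_report, observer_judgment)

        # In the abstract's terms, a cue is a diagnostic warrant roughly when it is
        # both valid and utilized, i.e. both correlations are reliably non-zero.
        print(accuracy, validity[:5], utilization[:5])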

    Easy over Hard: A Case Study on Deep Learning

    While deep learning is an exciting new technique, the benefits of this method need to be assessed with respect to its computational cost. This is particularly important for deep learning since these learners need hours (to weeks) to train the model. Such long training times limit the ability of (a) a researcher to test the stability of their conclusion via repeated runs with different random seeds; and (b) other researchers to repeat, improve, or even refute that original work. For example, recently, deep learning was used to find which questions in the Stack Overflow programmer discussion forum can be linked together. That deep learning system took 14 hours to execute. We show here that a very simple optimizer called differential evolution (DE), applied to fine-tune an SVM, can achieve similar (and sometimes better) results. The DE approach terminated in 10 minutes, i.e. 84 times faster than the deep learning method. We offer these results as a cautionary tale to the software analytics community and suggest that not every new innovation should be applied without critical analysis. If researchers deploy some new and expensive process, that work should be baselined against some simpler and faster alternatives. Comment: 12 pages, 6 figures, accepted at FSE 2017
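
    The tuning idea can be sketched in a few lines (illustrative only: synthetic data and scipy's stock DE implementation, not the paper's Stack Overflow task or its exact search space):

        # Sketch of tuning SVM hyperparameters with differential evolution (DE).
        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=500, n_features=20, random_state=0)

        def objective(params):
            log_C, log_gamma = params
            clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
            # DE minimizes, so return negative cross-validated accuracy.
            return -cross_val_score(clf, X, y, cv=3).mean()

        # Search C and gamma on a log10 scale; the bounds here are illustrative.
        result = differential_evolution(objective, bounds=[(-2, 3), (-4, 1)],
                                        maxiter=10, popsize=10, seed=0)
        print("best (log10 C, log10 gamma):", result.x, "accuracy:", -result.fun)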

    sk_p: a neural program corrector for MOOCs

    We present a novel technique for automatic program correction in MOOCs, capable of fixing both syntactic and semantic errors without manual, problem-specific correction strategies. Given an incorrect student program, it generates candidate programs from a distribution of likely corrections, and checks each candidate for correctness against a test suite. The key observation is that in MOOCs many programs share similar code fragments, and the seq2seq neural network model, used in the natural-language processing task of machine translation, can be modified and trained to recover these fragments. Experiments show our scheme can correct 29% of all incorrect submissions and outperforms a state-of-the-art approach that requires manual, problem-specific correction strategies.
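
    The outer generate-and-check loop described above can be sketched as follows (the seq2seq candidate generator is replaced by a hypothetical stub; only the control flow is shown):

        # Sketch of the generate-and-test loop described in the abstract.
        from typing import Callable, Iterable, Optional

        def correct_program(buggy_source: str,
                            generate_candidates: Callable[[str], Iterable[str]],
                            test_suite: Callable[[str], bool]) -> Optional[str]:
            """Return the first candidate fix that passes the test suite, if any."""
            for candidate in generate_candidates(buggy_source):
                try:
                    if test_suite(candidate):      # run the assignment's tests
                        return candidate
                except Exception:
                    continue                       # discard candidates that crash
            return None                            # no correction found

        # Usage sketch: `seq2seq_model.sample` and `run_tests` are assumed helpers,
        # standing in for the trained model and the MOOC assignment's test suite.
        # fixed = correct_program(student_code, seq2seq_model.sample, run_tests)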

    Bootstrapping six-gluon scattering in planar N=4 super-Yang-Mills theory

    We describe the hexagon function bootstrap for solving for six-gluon scattering amplitudes in the large N_c limit of N=4 super-Yang-Mills theory. In this method, an ansatz for the finite part of these amplitudes is constrained at the level of amplitudes, not integrands, using boundary information. In the near-collinear limit, the dual picture of the amplitudes as Wilson loops leads to an operator product expansion which has been solved using integrability by Basso, Sever and Vieira. Factorization of the amplitudes in the multi-Regge limit provides additional boundary data. This bootstrap has been applied successfully through four loops for the maximally helicity violating (MHV) configuration of gluon helicities, and through three loops for the non-MHV case. Comment: 15 pages, 3 figures, 2 tables; contribution to the proceedings of Loops and Legs in Quantum Field Theory, 27 April - 2 May 2014, Weimar, Germany; v2, reference added
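
    As a toy illustration of the bootstrap logic (generic linear algebra, not the actual hexagon-function machinery): once the ansatz is written as a linear combination of basis functions, each piece of boundary data becomes a linear constraint on the unknown coefficients.

        # Toy illustration: fix unknown coefficients of an ansatz by imposing linear
        # constraints from boundary data. Purely schematic, with random placeholders.
        import numpy as np

        # Suppose the ansatz is F(x) = sum_i c_i * f_i(x) over a basis of functions,
        # and boundary information gives linear relations  A @ c = b.
        rng = np.random.default_rng(1)
        n_basis, n_constraints = 6, 4
        A = rng.normal(size=(n_constraints, n_basis))   # constraints from limits/expansions
        b = rng.normal(size=n_constraints)

        # Solve in the least-squares sense; any remaining null space is the part of
        # the ansatz that the given boundary data does not fix.
        c, *_ = np.linalg.lstsq(A, b, rcond=None)
        null_dim = n_basis - np.linalg.matrix_rank(A)
        print("fitted coefficients:", c, "unfixed directions:", null_dim)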

    Second-order regression models exhibit progressive sharpening to the edge of stability

    Recent studies of gradient descent with large step sizes have shown that there is often a regime with an initial increase in the largest eigenvalue of the loss Hessian (progressive sharpening), followed by a stabilization of the eigenvalue near the maximum value which allows convergence (edge of stability). These phenomena are intrinsically non-linear and do not happen for models in the constant Neural Tangent Kernel (NTK) regime, for which the predictive function is approximately linear in the parameters. As such, we consider the next simplest class of predictive models, namely those that are quadratic in the parameters, which we call second-order regression models. For quadratic objectives in two dimensions, we prove that this second-order regression model exhibits progressive sharpening of the NTK eigenvalue towards a value that differs slightly from the edge of stability, which we explicitly compute. In higher dimensions, the model generically shows similar behavior, even without the specific structure of a neural network, suggesting that progressive sharpening and edge-of-stability behavior are not unique features of neural networks, but could be a more general property of discrete learning algorithms in high-dimensional non-linear models.
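
    A minimal numerical sketch of the setup (not the paper's exact model or analysis): take a predictor that is quadratic in its parameters, run full-batch gradient descent with step size eta, and track the largest eigenvalue of the loss Hessian (the sharpness) against the stability threshold 2/eta. Depending on the random draw and step size, the sharpness rises during training and can approach that threshold.

        # Illustrative second-order regression model: predictor quadratic in the
        # parameters, squared loss, full-batch gradient descent.
        import numpy as np

        rng = np.random.default_rng(0)
        d = 10
        Q = rng.normal(size=(d, d))
        Q = 0.5 * (Q + Q.T)                      # symmetric, so f is a genuine quadratic form
        v = rng.normal(size=d)
        y = 1.0                                  # scalar regression target

        def f(theta):                            # predictor, quadratic in the parameters
            return 0.5 * theta @ Q @ theta + v @ theta

        def grad_loss(theta):                    # gradient of the loss 0.5 * (f(theta) - y)**2
            return (f(theta) - y) * (Q @ theta + v)

        def hessian_loss(theta):                 # exact Hessian: g g^T + (f - y) Q, with g = grad f
            g = Q @ theta + v
            return np.outer(g, g) + (f(theta) - y) * Q

        eta = 0.02                               # step size; stability threshold is 2 / eta
        theta = 0.01 * rng.normal(size=d)
        for step in range(301):
            if not np.isfinite(f(theta)):
                break                            # guard against divergence for unlucky draws
            if step % 100 == 0:
                sharpness = np.linalg.eigvalsh(hessian_loss(theta))[-1]
                print(step, round(sharpness, 2), "threshold:", 2 / eta)
            theta = theta - eta * grad_loss(theta)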